America May Not Need a Massive Energy Build-Out to Power the AI Revolution

A new Duke University study argues that the existing U.S. electricity system already has the capacity to power the massive additions of data centers needed for the further development of artificial intelligence.
February 11, 2025 10:11 am (EST)

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
Amid surging demand for data centers to train and run cutting-edge artificial intelligence (AI) models, there is a consensus that an expensive build-out of America’s electric power system is unavoidable. President Trump declared a national energy emergency on the first day of his administration and has pledged to fast-track new power plants to fuel the data center boom. Meanwhile, electric power rates for homes and businesses are set to rise sharply across the country to pay for tens of billions of dollars of planned grid upgrades to accommodate those data centers. Powering these new facilities is a bipartisan priority for policymakers concerned about U.S. competitiveness in AI, especially after the release of the Chinese DeepSeek-R1 model.
A stunning new report out today from Duke University argues that the existing U.S. electricity system already has the “headroom” to power massive additions of data centers with no new grid or power plant infrastructure. The catch? New data centers need to incorporate a limited amount of flexibility in when they consume power, ramping down their use during rare hours during the year when regional power grids experience peak stress. Armed with this capability, new data centers could connect swiftly to existing regional power grids, the Duke researchers argue, without compromising grid reliability or waiting up to a decade for expensive new infrastructure to get built.
The magnitudes of what is already possible today are mind-boggling. As a reference point, consider Project Stargate, the data center megaproject announced by President Trump, which aims to invest $500 billion and could consume 15 to 25 GW of power. The Duke report concludes that roughly 100 GW, equivalent to four to five Project Stargates (more than $2 trillion in data center investments), could be connected in the near term to power grids across the United States with no new power supply or delivery infrastructure upgrades (Figure 1). Another way to think about this is that the capacity of new data centers that could be connected immediately represents more power demand than the entire U.S. fleet of ninety-four nuclear reactors can supply today.
Figure 1: Additional data center capacity that could be connected in each regional power grid if new data centers offered 0.5% curtailment capability (Source: Norris et al., 2025)
In contrast to these gargantuan estimates of new data center capacity that could be connected today, the flexibility required of new data centers is comparatively minuscule. Over the course of a year, these data centers could still consume 99.5 percent of the electricity they would use if they offered the grid no flexibility at all. They would commit only to reducing their power draw by an average of 25 percent for two hours at a time during periods of peak system stress, for a total of fewer than 200 hours per year.
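These figures are internally consistent. A quick back-of-the-envelope check (the arithmetic below is illustrative, not drawn from the report) shows how a 0.5 percent annual curtailment rate translates into fewer than 200 hours of partial ramp-downs:

$$0.005 \times 8{,}760 \ \text{hours/year} \approx 44 \ \text{full-load-equivalent hours curtailed}$$

$$\frac{44 \ \text{hours}}{0.25 \ \text{average reduction}} \approx 175 \ \text{hours/year} < 200$$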
So why isn’t this already being done? The answer is likely the strong historical precedent that data centers do not provide any flexibility in their power consumption, instead adhering to strict service-level standards that guarantee nearly 100 percent uptime to users of cloud computing applications. Yet the technologies to achieve flexibility are developing rapidly in light of the growing difficulty of connecting inflexible loads to the grid (for example, in Northern Virginia’s “Data Center Alley,” the power utility quotes a wait of more than seven years to connect a data center requiring more than 100 MW of power).
One approach to creating flexibility is to site batteries or generators alongside data centers. An even cheaper option, which avoids the capital costs of new equipment, is to orchestrate computational workloads across one or many data centers, precisely controlling their power consumption while maintaining acceptable levels of service quality for compute users (see, e.g., Coskun et al., 2024 and Mehra et al. (Google), 2023).
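To make the idea concrete, here is a minimal, hypothetical sketch of such an orchestrator (my own illustration, not the systems described in the papers cited above). When the grid signals peak stress, it pauses deferrable jobs, such as checkpointable training runs, so that the site’s total draw fits under a curtailed power cap, while latency-sensitive serving keeps running:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_mw: float   # power draw while running
    deferrable: bool  # can this job be paused during grid stress?

def curtail(jobs: list[Job], site_cap_mw: float, curtailment_pct: float) -> list[Job]:
    """Return the subset of jobs to keep running during a curtailment event.

    Non-deferrable jobs (e.g., latency-sensitive inference) always run;
    deferrable jobs (e.g., checkpointable training batches) stay paused
    until the total draw fits under the curtailed cap.
    """
    target_mw = site_cap_mw * (1 - curtailment_pct)
    running = [j for j in jobs if not j.deferrable]  # must-run load
    load = sum(j.power_mw for j in running)
    # Greedily resume deferrable jobs, largest first, while headroom remains.
    for job in sorted((j for j in jobs if j.deferrable),
                      key=lambda j: j.power_mw, reverse=True):
        if load + job.power_mw <= target_mw:
            running.append(job)
            load += job.power_mw
    return running

if __name__ == "__main__":
    fleet = [
        Job("inference-serving", power_mw=40, deferrable=False),
        Job("llm-training-a",    power_mw=30, deferrable=True),
        Job("llm-training-b",    power_mw=20, deferrable=True),
        Job("batch-eval",        power_mw=10, deferrable=True),
    ]
    # Grid signals peak stress: shed 25% of a 100 MW site cap for two hours.
    keep = curtail(fleet, site_cap_mw=100, curtailment_pct=0.25)
    print([j.name for j in keep], sum(j.power_mw for j in keep), "MW")
```

Real orchestration layers are far more sophisticated, migrating workloads across regions and throttling chip power, but the core logic is the same: distinguish deferrable from must-run work and fit the former to the grid’s available headroom.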
The most tantalizing implication of the Duke report is a connection that the authors don’t explicitly make. If AI data centers—the fastest-growing source of power demand in the United States—can provide flexibility services to the grid, they could actually become the grid’s most valuable resource. Flexible data centers, acting individually or in concert with one another across the country, could shore up the grid’s reliability during periods of peak usage or strain and better utilize a power grid that’s been built to service extreme peaks in demand. In light of this enticing promise of a free lunch—vastly more data center capacity with minimal new power and grid investments—electric power and technology industry giants have joined forces and formed the EPRI DC Flex consortium to further develop data center flexibility.
Prioritizing demand flexibility can buy time for power system planners to make prudent investments in the coming decade, from modernizing aging distribution infrastructure to building long-distance transmission to scaling up emerging nuclear power technologies. But an excessive and hasty power system build-out endangers public acceptance of the AI revolution as power prices rise, and that in turn could undermine U.S. competitiveness in AI. Data center power flexibility, therefore, is a capability well worth investing in to take full advantage of the resources we already have.
Varun Sivaram is senior fellow for energy and climate at the Council on Foreign Relations and Founder and CEO of Emerald AI, which develops orchestration software for computational power use.